
    Document recognition of printed scores and transformation into MIDI

    The processing of printed music on scanned paper pages is an interesting application of computer-based analysis of printed information: the music notation on the page must be recognized and reproduced, which requires numerous image processing methods and knowledge-based procedures. The DOREMIDI system processes simple two-handed piano pieces in the following steps: (1) scanning the paper pages, (2) segmenting the binary image data into basic components, (3) knowledge-based analysis and symbolic representation of the musical score, and (4) visual and acoustic reproduction of the results. DOREMIDI was implemented on a Macintosh II in Common Lisp (CLOS). The user interface follows the standard Macintosh conventions, so windows and menus can be used in a straightforward way, and a keyboard is used for the acoustic reproduction of the results.
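    The four-stage pipeline described above (scan, segment, analyze, play back) can be outlined in a few functions. The sketch below is a minimal, hypothetical Python skeleton of such an OMR-to-MIDI pipeline; the function names, the simple threshold binarization, and the stubbed recognition steps are illustrative assumptions, not DOREMIDI's actual implementation (which was written in Common Lisp/CLOS).

```python
# Hypothetical sketch of an OMR-to-MIDI pipeline in the spirit of DOREMIDI.
# Names and the naive processing steps are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Note:
    pitch: int       # MIDI note number
    start: float     # onset in quarter notes
    duration: float  # length in quarter notes

def binarize(gray_image: List[List[int]], threshold: int = 128) -> List[List[int]]:
    """Step 1: turn a scanned grayscale page into binary (ink / background)."""
    return [[1 if px < threshold else 0 for px in row] for row in gray_image]

def segment_components(binary_image: List[List[int]]) -> List[dict]:
    """Step 2: split the binary image into basic components
    (staff lines, note heads, stems, ...). Stubbed here."""
    return [{"kind": "notehead", "row": 10, "col": 42}]

def analyze_score(components: List[dict]) -> List[Note]:
    """Step 3: knowledge-based analysis -> symbolic score representation.
    A real system maps vertical position to pitch via the staff model."""
    return [Note(pitch=60, start=0.0, duration=1.0)]  # middle C, placeholder

def play_back(notes: List[Note]) -> None:
    """Step 4: visual/acoustic reproduction (here: just print the events)."""
    for n in notes:
        print(f"note {n.pitch} at {n.start} for {n.duration} quarter notes")

if __name__ == "__main__":
    page = [[255] * 100 for _ in range(100)]  # dummy blank scan
    play_back(analyze_score(segment_components(binarize(page))))
```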

    Are we Ready to Embrace the Semantic Web?

    The aim of the semantic web is to describe resources available on the web using metadata elements that can be processed or interpreted by machines. MPEG-7 is the result of a standardisation effort to annotate multimedia documents, and it offers a rich suite of metadata descriptors for describing these documents at various levels of abstraction, from low-level features to high-level semantics. Owing to the proliferation of multimedia content on the internet, there is now considerable interest in the semantic web community in multimedia metadata standards in general, and MPEG-7 in particular. Although the semantic web initiatives could benefit greatly from MPEG-7 for the annotation of multimedia documents, recent studies have highlighted the limitations of MPEG-7 in describing the semantics of highly structured domains such as sports or medicine. This has led to an upsurge of interest in an integrated approach to the design of multimedia ontologies. In our work, we describe a systematic approach to the design of multimedia ontologies in which MPEG-7 is used to model only the structural and low-level aspects of multimedia documents, while high-level semantics are described using domain-specific vocabularies. A retrieval engine based on this framework can then process high-level, text-based semantic queries. While much research has addressed the design of multimedia ontologies, a persistent open issue is the automatic annotation of multimedia content at the semantic level. Low-level descriptors can be derived using state-of-the-art techniques in multimedia content analysis, but the same does not hold when analysing multimedia content at a high level of abstraction. We discuss various approaches that have recently been proposed to accomplish this task. An interesting line of discussion is the automatic population and enrichment of multimedia ontologies, which poses many challenges and stresses the need for efficient approaches to the semantic analysis of multimedia documents.
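    The hybrid annotation approach described above, MPEG-7 descriptors for structure and low-level features combined with a domain vocabulary for high-level semantics, can be illustrated with a small RDF sketch. The snippet below uses Python's rdflib; the namespace URIs and the property and class names for the "MPEG-7" and "sport" ontologies are hypothetical placeholders, not the actual standardised vocabularies.

```python
# Minimal sketch of a hybrid multimedia annotation: structural/low-level
# descriptors from an (assumed) MPEG-7 ontology plus high-level semantics
# from a domain-specific vocabulary. Namespaces and terms are placeholders.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

MPEG7 = Namespace("http://example.org/mpeg7#")   # assumed MPEG-7 ontology
SPORT = Namespace("http://example.org/sport#")   # assumed domain ontology
EX = Namespace("http://example.org/media/")

g = Graph()
video = EX["match42/segment7"]

# Structural / low-level description (MPEG-7 side)
g.add((video, RDF.type, MPEG7.VideoSegment))
g.add((video, MPEG7.mediaDuration, Literal("PT12S", datatype=XSD.duration)))
g.add((video, MPEG7.dominantColor, Literal("green")))

# High-level semantics (domain vocabulary side)
g.add((video, SPORT.depictsEvent, SPORT.Goal))
g.add((video, SPORT.scoredBy, URIRef("http://example.org/players/7")))

print(g.serialize(format="turtle"))
```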

    Ökologischer Ausgleich auf dem Dach : Vegetation und bodenbrütende Vögel

    Final report 2009. The research project «Ground nesting birds on greened roofs» aims to develop new techniques for roof greening systems and installation concepts for extensive greened roof areas. The results will help to provide alternative habitats offering areas of ecological compensation in line with Article 18 of the Nature and Homeland Protection Regulations, specifically devised to benefit ground nesting birds. The project will develop guidelines for communities and cantons to enable them to plan the efficient use of flat roof surfaces through appropriate greening measures.

    PatchIndex: exploiting approximate constraints in distributed databases

    Cloud data warehouse systems lower the barrier to accessing data analytics. These applications often lack a database administrator and integrate data from various sources, which can lead to data that does not satisfy strict constraints. Automatic schema optimization in self-managing databases is difficult in such environments without prior data cleaning steps. In this paper, we focus on constraint discovery as a subtask of schema optimization. Perfect constraints might not exist in these unclean datasets because a small set of values violates them. We therefore introduce the concept of a generic PatchIndex structure, which handles exceptions to given constraints and enables database systems to define such approximate constraints. We apply the concept to distributed databases, providing parallel index creation approaches and optimization techniques for parallel queries using PatchIndexes. Furthermore, we describe heuristics for the automatic discovery of PatchIndex candidate columns and demonstrate the performance benefit of PatchIndexes in our evaluation.
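    As a rough illustration of an approximate constraint with a patch list of exceptions, the sketch below checks whether a column is "nearly sorted" and records the positions of violating rows. The data structure, the exception threshold, and the discovery loop are assumptions for illustration, not the paper's actual PatchIndex design.

```python
# Hypothetical sketch of an "approximate constraint": a column that is sorted
# except for a small set of rows, whose positions are kept in a patch list.
from typing import List, Tuple

def discover_patched_sorted(column: List[int], max_exceptions: float = 0.01
                            ) -> Tuple[bool, List[int]]:
    """Return (constraint_holds_approximately, patch_list_of_row_ids)."""
    patches = []
    last_ok = None
    for row_id, value in enumerate(column):
        if last_ok is not None and value < last_ok:
            patches.append(row_id)   # row violates the sortedness constraint
        else:
            last_ok = value
    holds = len(patches) <= max_exceptions * len(column)
    return holds, patches

if __name__ == "__main__":
    col = list(range(1000))
    col[500] = 3                     # a single dirty value
    ok, patch = discover_patched_sorted(col)
    print(ok, patch)                 # True, [500]
```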

    I tetti verdi di tipo estensivo : biodiversità ad alta quota

    Odum (1983) compared cities to heterotrophic organisms: they base their growth and expansion on the indiscriminate use of resources and cause irreversible loss and fragmentation of natural habitats. Green roofs are an essential means of environmental mitigation and compensation within the urban fabric, where high building density and strong anthropogenic disturbance leave little room for natural dynamics. In particular, green roofs for biodiversity (or biodiversity green roofs), characterised by mosaics of different but contiguous microhabitats, can host species with different morpho-functional traits. The approach known as habitat template consists of selecting plant species suited to the conditions on extensive green roofs from those that grow in nature under similar conditions, e.g. shallow, nutrient-poor substrates and long periods of drought. The phytosociological approach uses analogous habitats not only as species pools from which to draw plant species, but also as a model for grouping plants into specific associations, as suggested by the phytosociological interpretation of nature.

    Automatic Schema Design for Co-Clustered Tables

    Schema design for analytical workloads offers opportunities to index, cluster, partition and/or materialize. With these opportunities, the complexity of finding the right setup also rises. In this paper we present an automatic schema design approach for a table co-clustering scheme called Bitwise Dimensional Co-Clustering, aimed at schemas with a moderate number of dimensions, but not limited to typical star and snowflake schemas. The goal is to design one primary schema while keeping the knobs to turn to a minimum, providing a robust schema for a wide range of queries. In our approach, a clustered schema is derived by applying dimensions throughout the whole schema and co-clustering as many tables as possible on at least one common dimension. The approach assumes that foreign-key relationships and a set of dimensions are initially defined via classic DDL.
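    The core idea, propagating dimensions along foreign-key relationships so that as many tables as possible end up co-clustered on at least one shared dimension, can be sketched as a simple graph traversal. The table names, the FK-graph representation, and the greedy propagation below are illustrative assumptions, not the algorithm from the paper.

```python
# Hypothetical sketch: propagate clustering dimensions along foreign-key
# edges so that related tables are co-clustered on a shared dimension.
from collections import defaultdict, deque

# foreign-key relationships: child table -> referenced (parent) tables
FOREIGN_KEYS = {
    "lineitem": ["orders", "part"],
    "orders":   ["customer"],
    "part":     [],
    "customer": [],
}

# dimensions initially defined on some tables (e.g. via DDL annotations)
INITIAL_DIMS = {"customer": {"region"}, "part": {"category"}}

def propagate_dimensions(fks, initial):
    """Breadth-first propagation of each dimension from the table that
    defines it to every table reachable via foreign-key references."""
    children = defaultdict(list)          # parent -> children referencing it
    for child, parents in fks.items():
        for p in parents:
            children[p].append(child)

    dims = defaultdict(set, {t: set(d) for t, d in initial.items()})
    for table, table_dims in initial.items():
        queue = deque([table])
        while queue:
            current = queue.popleft()
            for child in children[current]:
                if not table_dims <= dims[child]:
                    dims[child] |= table_dims
                    queue.append(child)
    return dict(dims)

if __name__ == "__main__":
    for table, d in propagate_dimensions(FOREIGN_KEYS, INITIAL_DIMS).items():
        print(f"{table}: cluster on {sorted(d)}")
```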

    Mapping cropland-use intensity across Europe using MODIS NDVI time series

    Global agricultural production will likely need to increase in the future due to population growth, changing diets, and the rising importance of bioenergy. Intensifying existing cropland is often considered more sustainable than converting more natural areas. Unfortunately, our understanding of cropping patterns and intensity is weak, especially at broad geographic scales. We characterized and mapped cropping systems in Europe, a region with diverse cropping systems, using four indicators: (a) cropping frequency (number of cropped years), (b) multi-cropping (number of harvests per year), (c) fallow cycles, and (d) crop duration ratio (actual time under crops), based on the MODIS Normalized Difference Vegetation Index (NDVI) time series from 2000 to 2012. We then used these cropping indicators and self-organizing maps to identify typical cropping systems. The resulting six clusters correspond well with other indicators of agricultural intensity (e.g., nitrogen input, yields) and reveal substantial differences in cropping intensity across Europe. Cropping intensity was highest in Germany, Poland, and the eastern European Black Earth regions, characterized by high cropping frequency, multi-cropping, and a high crop duration ratio. In contrast, cropping intensity was lowest in eastern Europe outside the Black Earth region, characterized by longer fallow cycles. Our approach highlights how satellite image time series can help characterize spatial patterns in cropping intensity, information that is rarely surveyed on the ground and commonly not included in agricultural statistics. Our clustering approach also shows a way forward for reducing complexity when measuring multiple indicators. The four cropping indicators we used could become part of continental-scale agricultural monitoring to identify target regions for sustainable intensification, where trade-offs between intensification and the environment should be explored.
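    Two of the indicators above, cropping frequency (number of cropped years) and multi-cropping (harvests per year), can be approximated from an NDVI time series by counting seasons with a clear greenness peak. The sketch below uses NumPy with a fixed NDVI threshold; the threshold, the yearly reshaping, and the simple peak heuristic are assumptions for illustration, not the method used in the study.

```python
# Hypothetical sketch: derive simple cropping-intensity indicators from a
# per-pixel NDVI time series (e.g. 23 MODIS composites per year, 2000-2012).
import numpy as np

def cropping_indicators(ndvi, composites_per_year=23, green_threshold=0.5):
    """ndvi: 1-D array covering whole years. Returns (number of cropped
    years, mean harvests per year), based on NDVI peaks above a threshold."""
    years = ndvi.reshape(-1, composites_per_year)
    cropped_years = 0
    harvests = []
    for year in years:
        # a "peak" is a local maximum above the greenness threshold
        peaks = [
            i for i in range(1, len(year) - 1)
            if year[i] > green_threshold
            and year[i] >= year[i - 1] and year[i] > year[i + 1]
        ]
        harvests.append(len(peaks))
        cropped_years += 1 if peaks else 0
    return cropped_years, float(np.mean(harvests))

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 23)
    one_season = 0.2 + 0.5 * np.maximum(np.sin(t), 0)  # single summer peak
    series = np.tile(one_season, 13)                    # 13 years, 2000-2012
    print(cropping_indicators(series))                  # -> (13, 1.0)
```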

    Left ventricular apical thrombus after systemic thrombolysis with recombinant tissue plasminogen activator in a patient with acute ischemic stroke

    BACKGROUND: Thrombolysis with recombinant tissue plasminogen activator (rtPA) is an established treatment in acute stroke. To prevent rethrombosis after rtPA therapy, secondary anticoagulation with heparin is commonly performed. However, the recommended timing and extent of heparin treatment vary and are not well investigated. CASE PRESENTATION: We report a 61-year-old man who developed acute global aphasia and right-sided hemiparesis. Cranial CT was normal, and systemic thrombolytic therapy with rtPA was started 120 minutes after symptom onset. Low-dose subcutaneous heparin treatment was initiated 24 hours later. Transthoracic echocardiography (TTE) 12 hours after admission showed a slightly reduced left ventricular ejection fraction (LVEF) but was otherwise normal. 48 hours later the patient suddenly deteriorated, with clinical signs of dyspnea and tachycardia. TTE revealed a large left ventricular apical thrombus as well as a reduction of LVEF to 20%. Further serial TTE investigations demonstrated complete resolution of the thrombus and normalisation of LVEF within two days. CONCLUSION: Our case demonstrates intracardiac thrombus formation following rtPA treatment of acute stroke, probably caused by secondary hypercoagulability. Rethrombosis or new thrombus formation might be an underestimated complication of rtPA therapy and could potentially explain cases of secondary stroke progression.

    Architektur für ein System zur Dokumentanalyse im Unternehmenskontext - Integration von Datenbeständen, Aufbau- und Ablauforganisation

    Workflow management systems are increasingly used in office environments for the efficient execution of business processes. Yet the paperless office, already propagated in the mid-1970s, still remains a utopia. This contradiction stems from the fact that handling paper-intensive processes depends to a large degree on identifying and preparing the information contained in the documents; such data, for example from incoming mail, still has to be entered by hand. This document presents the architecture of a system intended to bridge this media gap. Techniques from the fields of document analysis and document understanding are integrated into the workflow context and exploit the knowledge available there to improve recognition quality. The architecture document is based on a likewise documented requirements analysis (DFKI Document D-97-05). It contains a static and a dynamic description of the required class categories and explains their functionality by means of a comprehensive example.
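    A small sketch of the central idea, using knowledge from the workflow context (e.g. which document types a given workflow step expects) to re-rank the hypotheses of a document classifier. The classifier scores, the context priors, and the multiplicative combination rule are illustrative assumptions, not the class design from the architecture document.

```python
# Hypothetical sketch: bias a document classifier's hypotheses with prior
# knowledge from the workflow context (expected document types at this step).
def rerank_with_context(classifier_scores, context_priors):
    """Combine raw recognition scores with workflow-context priors and
    return document types ordered by the combined score."""
    combined = {
        doc_type: score * context_priors.get(doc_type, 0.05)  # small default prior
        for doc_type, score in classifier_scores.items()
    }
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # raw scores from image-based document analysis (assumed values)
    scores = {"invoice": 0.40, "delivery_note": 0.35, "order": 0.25}
    # the workflow step "verify incoming invoices" mostly expects invoices
    priors = {"invoice": 0.80, "delivery_note": 0.15, "order": 0.05}
    print(rerank_with_context(scores, priors))
```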